Supplementary material for: Convergence Rate Analysis of MAP Coordinate Minimization Algorithms

Authors

  • Ofer Meshi
  • Tommi Jaakkola
  • Amir Globerson
Abstract

Proof. $\|\nabla F(\delta)\|_\infty \le \epsilon$ guarantees that the marginals $\mu = \mu(\delta)$ are $\epsilon$-consistent, in the sense that $|\mu_i(x_i) - \mu_c(x_i)| \le \epsilon$ for all $c$, $i \in c$, and $x_i$. Algorithm 1 maps any such $\epsilon$-consistent $\mu$ to locally consistent marginals $\tilde{\mu}$ such that

$$
|\mu_i(x_i) - \tilde{\mu}_i(x_i)| \le 3\epsilon N_{\max}, \qquad |\mu_c(x_c) - \tilde{\mu}_c(x_c)| \le 2\epsilon N_{\max}, \tag{3}
$$

for all $i, x_i, c$, and $x_c$, where $N_{\max} = \max\{\max_i N_i, \max_c N_c\}$. In other words, $\|\mu - \tilde{\mu}\|_\infty \le K\epsilon$ with $K = 3N_{\max}$. This can be easily derived from the update in Algorithm 1 and the fact that $|\mu_i(x_i) - \mu_c(x_i)| \le \epsilon$. Next, it can be shown that $F(\delta) = P_\tau(\mu(\delta))$, and it follows that $P_\tau^* \le F(\delta) = P_\tau(\mu)$, where the inequality follows from weak duality. Thus we have:

$$
P_\tau^* \le P_\tau(\mu) = \mu \cdot \theta + \tfrac{1}{\tau} H(\mu)
= \tilde{\mu} \cdot \theta + \tfrac{1}{\tau} H(\tilde{\mu})
+ (\mu - \tilde{\mu}) \cdot \theta + \tfrac{1}{\tau}\bigl(H(\mu) - H(\tilde{\mu})\bigr) \tag{4}
$$
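The excerpt stops at (4). One natural way to finish from here, sketched under stated assumptions (the constant $K = 3N_{\max}$ reconstructed above and the entropy modulus $\omega$ are assumptions, not necessarily the authors' exact continuation): subtract $P_\tau(\tilde{\mu}) = \tilde{\mu} \cdot \theta + \frac{1}{\tau} H(\tilde{\mu})$ from both sides of (4) and bound the residual terms with Hölder's inequality,

$$
\begin{aligned}
P_\tau^* - P_\tau(\tilde{\mu})
  &\le (\mu - \tilde{\mu}) \cdot \theta + \tfrac{1}{\tau}\bigl(H(\mu) - H(\tilde{\mu})\bigr) \\
  &\le \|\mu - \tilde{\mu}\|_\infty \, \|\theta\|_1 + \tfrac{1}{\tau}\bigl|H(\mu) - H(\tilde{\mu})\bigr| \\
  &\le K\epsilon \, \|\theta\|_1 + \tfrac{1}{\tau}\,\omega(K\epsilon),
\end{aligned}
$$

where $\omega(\cdot)$ denotes a modulus of continuity of the entropy $H$ over the local polytope. Since $\tilde{\mu}$ is primal feasible, the left-hand side is nonnegative, so this bounds the suboptimality of $\tilde{\mu}$ in terms of $\epsilon$.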


Similar articles

Convergence Rate Analysis of MAP Coordinate Minimization Algorithms

Finding maximum a posteriori (MAP) assignments in graphical models is an important task in many applications. Since the problem is generally hard, linear programming (LP) relaxations are often used. Solving these relaxations efficiently is thus an important practical problem. In recent years, several authors have proposed message passing updates corresponding to coordinate descent in the dual L...
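The scheme this abstract describes can be made concrete on a toy instance. Below is a minimal numerical sketch of coordinate minimization on a smoothed MAP-LP dual: a two-variable, one-edge model, with the dual written as a sum of log-sum-exp terms over reparameterized potentials and each dual coordinate minimized exactly by a scalar solver. The model, variable names, and use of a generic scalar minimizer (rather than closed-form message updates) are illustrative assumptions, not the paper's updates.

    # Minimal sketch: coordinate minimization of a smoothed MAP-LP dual
    # on a toy two-node model. Illustrative assumptions throughout.
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.special import logsumexp

    tau = 10.0                                              # smoothing parameter
    theta_i = [np.array([0.0, 1.0]), np.array([0.5, 0.0])]  # unary potentials
    theta_e = np.array([[0.0, 2.0], [2.0, 0.0]])            # edge potential

    def F(delta):
        """Smoothed dual: soft-max of the reparameterized potentials."""
        d = delta.reshape(2, 2)                  # d[i]: message into node i
        val = sum(logsumexp(tau * (theta_i[i] + d[i])) / tau for i in range(2))
        rep_edge = theta_e - d[0][:, None] - d[1][None, :]
        return val + logsumexp(tau * rep_edge) / tau

    delta = np.zeros(4)
    for sweep in range(50):
        for k in range(4):                       # exact coordinate minimization
            def g(t, k=k):
                d = delta.copy(); d[k] = t
                return F(d)
            delta[k] = minimize_scalar(g).x
    print("approximate smoothed dual optimum:", F(delta))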


Asynchronous Parallel Coordinate Minimization for MAP Inference

Finding the maximum a-posteriori (MAP) assignment is a central task for structured prediction. Since modern applications give rise to very large structured problem instances, there is increasing need for efficient solvers. In this work we propose to improve the efficiency of coordinate-minimization-based dual-decomposition solvers by running their updates asynchronously in parallel. In this cas...
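As a concrete picture of updates "running asynchronously in parallel", here is a minimal sketch in the lock-free (Hogwild-style) pattern: several threads repeatedly pick a coordinate of a shared iterate and apply an exact coordinate-minimization update without locking, on a generic strongly convex quadratic. This illustrates only the update pattern; it is not the paper's dual-decomposition solver, and the quadratic objective is an assumption.

    # Minimal sketch: lock-free asynchronous coordinate minimization
    # on a strongly convex quadratic 0.5*x'Ax - b'x (illustrative only).
    import threading
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    M = rng.normal(size=(n, n))
    A = M @ M.T + n * np.eye(n)      # positive definite
    b = rng.normal(size=n)
    x = np.zeros(n)                  # shared iterate, updated without locks

    def worker(seed, steps=2000):
        local = np.random.default_rng(seed)
        for _ in range(steps):
            k = int(local.integers(n))
            # exact minimization over coordinate k, reading possibly stale x
            x[k] = (b[k] - A[k] @ x + A[k, k] * x[k]) / A[k, k]

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("residual norm:", np.linalg.norm(A @ x - b))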


Local Smoothness in Variance Reduced Optimization

We propose a family of non-uniform sampling strategies to provably speed up a class of stochastic optimization algorithms with linear convergence including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA). For a large family of penalized empirical risk minimization problems, our methods exploit data dependent local smoothness of the loss functions near th...
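To make "non-uniform sampling based on smoothness" concrete, here is a minimal SVRG sketch on least squares where example i is sampled with probability proportional to a per-example smoothness constant L_i = ||a_i||^2, and the variance-reduced gradient is reweighted by 1/(n p_i) so the estimator stays unbiased. The sampling rule and step size are standard choices assumed for illustration, not necessarily the paper's exact strategy.

    # Minimal sketch: SVRG with importance sampling on least squares.
    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 200, 20
    A = rng.normal(size=(n, d)) * rng.uniform(0.1, 3.0, size=(n, 1))  # uneven rows
    y = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

    L = np.sum(A ** 2, axis=1)        # per-example smoothness constants
    p = L / L.sum()                   # non-uniform sampling distribution
    step = 0.25 / L.max()             # conservative step size

    def grad_i(x, i):                 # gradient of the i-th loss 0.5*(a_i'x - y_i)^2
        return (A[i] @ x - y[i]) * A[i]

    x = np.zeros(d)
    for epoch in range(30):
        snap = x.copy()
        full = A.T @ (A @ snap - y) / n          # full gradient at snapshot
        for _ in range(n):
            i = rng.choice(n, p=p)
            # reweighting by 1/(n*p_i) keeps the estimator unbiased
            g = (grad_i(x, i) - grad_i(snap, i)) / (n * p[i]) + full
            x -= step * g
    print("mean squared residual:", np.mean((A @ x - y) ** 2))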


Parallel Block Coordinate Minimization with Application to Group Regularized Regression

This paper proposes a method for parallel block coordinate-wise minimization for convex functions. Each iteration involves a first phase where n independent minimizations are performed over the n variable blocks, followed by a phase where the results of the first phase are coordinated to obtain the whole variable update. Convergence of the method to the global optimum is proved for functions co...
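A minimal sketch of the two-phase scheme described above, on least squares: phase one solves the independent per-block minimizations (each holding the other blocks fixed, hence parallelizable), and phase two "coordinates" them via an exact line search along the combined update. The problem instance and the line-search coordination rule are illustrative assumptions, not necessarily the paper's method.

    # Minimal sketch: parallel block coordinate minimization, two phases.
    import numpy as np

    rng = np.random.default_rng(2)
    m, d, nb = 120, 30, 3                       # nb blocks of equal size
    A = rng.normal(size=(m, d))
    b = rng.normal(size=m)
    blocks = np.split(np.arange(d), nb)

    x = np.zeros(d)
    for it in range(40):
        # Phase 1: minimize each block independently (parallelizable)
        prop = x.copy()
        for B in blocks:
            r = b - A @ x + A[:, B] @ x[B]      # residual excluding block B
            prop[B] = np.linalg.lstsq(A[:, B], r, rcond=None)[0]
        # Phase 2: coordinate the block proposals via exact line search
        dvec = prop - x
        Ad = A @ dvec
        t = Ad @ (b - A @ x) / (Ad @ Ad + 1e-12)
        x = x + t * dvec
    print("residual norm:", np.linalg.norm(A @ x - b))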


Random Coordinate Descent Methods for Minimizing Decomposable Submodular Functions

Submodular function minimization is a fundamental optimization problem that arises in several applications in machine learning and computer vision. The problem is known to be solvable in polynomial time, but general purpose algorithms have high running times and are unsuitable for large-scale problems. Recent work have used convex optimization techniques to obtain very practical algorithms for ...
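For the decomposable case, the coordinate updates can be strikingly simple. Below is a minimal sketch for a function that decomposes into a modular part plus graph-cut terms, phrased as random block coordinate descent on the min-norm-point problem min ||c + sum_r y_r||^2 with each y_r in the base polytope of one edge function; that polytope is a segment, so each block update is a one-dimensional clip, and thresholding the final point at zero yields an (approximate) minimizer. The instance and this particular decomposition are illustrative assumptions, not the paper's algorithm.

    # Minimal sketch: random block coordinate descent for a decomposable
    # submodular function (modular costs + graph cut), via min-norm point.
    import numpy as np

    rng = np.random.default_rng(3)
    nv = 6
    c = rng.normal(size=nv)          # modular part of F
    edges = [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 1.5),
             (3, 4, 0.7), (4, 5, 1.0), (0, 5, 0.3)]
    a = np.zeros(len(edges))         # y_r = (a_r, -a_r) on edge r = (u, v, w)

    def point():
        """Current point c + sum_r y_r."""
        y = c.copy()
        for (u, v, w), ar in zip(edges, a):
            y[u] += ar
            y[v] -= ar
        return y

    for it in range(200):
        r = int(rng.integers(len(edges)))        # pick a random block (edge)
        u, v, w = edges[r]
        y = point()
        zu, zv = y[u] - a[r], y[v] + a[r]        # contributions of other blocks
        a[r] = np.clip((zv - zu) / 2.0, -w, w)   # exact 1-D block minimization

    y = point()
    S = np.flatnonzero(y < 0)                    # threshold the min-norm point
    print("approximate minimizer S =", S.tolist())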



Publication date: 2012